Reviews: Saccader: Improving Accuracy of Hard Attention Models for Vision

Neural Information Processing Systems

This paper addresses the problem of training hard-attention mechanisms for image classification. To do so, it introduces a new hard-attention layer (called a Saccader cell) together with a pretraining procedure that improves performance. More importantly, the authors show that the approach is more interpretable, requiring fewer glimpses than other methods, while outperforming similar approaches and coming close in performance to non-interpretable models such as ResNet. Originality: The proposed Saccader model is original and compares favorably to state-of-the-art work in terms of performance and, more importantly, interpretability. Related work has been cited adequately.


Saccader: Improving Accuracy of Hard Attention Models for Vision

Elsayed, Gamaleldin F., Kornblith, Simon, Le, Quoc V.

arXiv.org Machine Learning

Although deep convolutional neural networks achieve state-of-the-art performance across nearly all image classification tasks, they are often regarded as black boxes. Because they compute a nonlinear function of the entire input image, their decisions are difficult to interpret. One approach that offers some level of interpretability by design is hard attention, which selects only relevant portions of the image. However, training hard attention models with only class label supervision is challenging, and hard attention has proved difficult to scale to complex datasets. Here, we propose a novel hard attention model, which we term Saccader, as well as a self-supervised pretraining procedure for this model that does not suffer from optimization challenges. Through pretraining and policy gradient optimization, the Saccader model estimates the relevance of different image patches to the downstream task, and uses a novel cell to select patches to classify at different times. Our approach achieves high accuracy on ImageNet while providing more interpretable predictions.
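The core idea described above — scoring image patches by relevance and attending to a small number of them one at a time — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the greedy argmax-with-masking selection are simplifying assumptions standing in for the Saccader cell's learned sequential policy.

```python
import numpy as np

def select_patches(relevance, num_glimpses=3):
    """Greedy hard-attention selection over a flattened relevance map.

    At each time step, pick the highest-scoring patch not yet attended,
    then mask it out so each location is visited at most once (a
    simplified stand-in for the Saccader cell's sequential selection).
    """
    scores = relevance.astype(float).copy()
    chosen = []
    for _ in range(num_glimpses):
        idx = int(np.argmax(scores))
        chosen.append(idx)
        scores[idx] = -np.inf  # prevent re-selecting the same patch
    return chosen

# Toy relevance scores for a 2x2 grid of patches, flattened row-major.
relevance = np.array([0.10, 0.70, 0.05, 0.15])
print(select_patches(relevance, num_glimpses=2))  # → [1, 3]
```

A classifier would then see only the selected patches, which is what makes the prediction interpretable: the chosen locations directly indicate which image regions drove the decision.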